Multi-state perceptrons: learning rule and perceptron of maximal stability

Author

  • E. Elizalde
Abstract

A new perceptron learning rule is obtained for multilayer neural networks made of multi-state units, and the corresponding convergence theorem is proved. The definition of the perceptron of maximal stability is enlarged to include these new multi-state perceptrons, and a proof of existence and uniqueness of such optimal solutions is outlined.
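To make the idea concrete, here is a minimal Python sketch of a multi-state unit and a perceptron-style learning rule. The threshold encoding of the states, the fixed evenly spaced thresholds, and the update are illustrative assumptions, not Elizalde's exact formulation: the unit's state is taken to be the number of thresholds its weighted sum exceeds, and the weights are nudged toward the input when the state falls short of the target (and away from it otherwise).

```python
import numpy as np

def multistate_output(w, x, thresholds):
    """Unit state = number of thresholds the weighted sum exceeds (0 .. Q-1)."""
    return int(np.sum(np.dot(w, x) >= thresholds))

def train_multistate_perceptron(X, targets, Q, lr=0.1, epochs=100):
    """Perceptron-style rule: nudge w toward (away from) the input when the
    unit's state is below (above) the target state."""
    w = np.zeros(X.shape[1])
    thresholds = np.linspace(-1.0, 1.0, Q - 1)   # fixed, evenly spaced (assumption)
    for _ in range(epochs):
        mistakes = 0
        for x, t in zip(X, targets):
            s = multistate_output(w, x, thresholds)
            if s != t:
                w += lr * np.sign(t - s) * x     # push the sum toward t's interval
                mistakes += 1
        if mistakes == 0:                        # no errors: converged on this set
            break
    return w, thresholds
```

For Q = 2 and a single threshold at zero, this sketch reduces to the classical perceptron rule, which is the sense in which a convergence theorem for it generalizes the classical one.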


Similar resources

A learning rule for very simple universal approximators consisting of a single layer of perceptrons

One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if the majority of the binary perceptron outputs is taken as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with valu...

Full text
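As a quick illustration of the circuit described above, a parallel perceptron takes the majority vote of its binary perceptron outputs. The weights below are hand-picked for the example, not produced by any learning rule:

```python
import numpy as np

def parallel_perceptron(W, x):
    """Each row of W is one perceptron; the circuit output is the
    majority vote over the perceptrons' +/-1 outputs."""
    votes = np.where(W @ x >= 0, 1, -1)
    return 1 if votes.sum() >= 0 else -1

# Hand-picked weights for illustration: three perceptrons on 2-d inputs.
W = np.array([[ 1.0,  1.0],
              [ 1.0, -1.0],
              [-1.0,  1.0]])
print(parallel_perceptron(W, np.array([1.0, 1.0])))   # -> 1
```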

Reducing Communication for Distributed Learning in Neural Networks

A learning algorithm is presented for circuits consisting of a single layer of perceptrons. We refer to such circuits as parallel perceptrons. In spite of their simplicity, these circuits are universal approximators for arbitrary boolean and continuous functions. In contrast to backprop for multi-layer perceptrons, our new learning algorithm – the parallel delta rule (p-delta rule) – only has t...

Full text
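A simplified sketch of a p-delta-style step follows; it tracks the published rule only loosely, and the tolerance eps, margin gamma, and rates are illustrative. The point it shows is that each perceptron is adjusted using nothing more than the difference between the population vote and the target, which is what keeps the communication cost low:

```python
import numpy as np

def p_delta_step(W, x, y_target, lr=0.01, eps=0.05, gamma=0.1, mu=1.0):
    """One simplified p-delta update: compare only the population vote with
    the target, so no per-unit error signal has to be communicated."""
    s = W @ x
    y_hat = np.mean(np.where(s >= 0, 1.0, -1.0))   # population output in [-1, 1]
    for i in range(W.shape[0]):
        if y_hat > y_target + eps and s[i] >= 0:
            W[i] -= lr * x                 # too many +1 votes: push this one down
        elif y_hat < y_target - eps and s[i] < 0:
            W[i] += lr * x                 # too few +1 votes: push this one up
        elif 0 <= s[i] < gamma:
            W[i] += lr * mu * x            # margin term: move sums away from zero
        elif -gamma < s[i] < 0:
            W[i] -= lr * mu * x
    return W
```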

Growing Layers of Perceptrons: Introducing the Extentron Algorithm

The ideas presented here are based on two observations about perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select the one that gives the best split of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron, which grows multi...

Full text
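Observation (1) amounts to a pocket-style selection: while the perceptron updates cycle, keep the hyperplane that classifies the most examples correctly. A minimal sketch under that reading (not the full Extentron, which also grows layers):

```python
import numpy as np

def best_split_perceptron(X, y, epochs=50, lr=1.0):
    """Run the perceptron rule but remember the hyperplane that best splits
    the examples: a pocket-style reading of observation (1) above."""
    w = np.zeros(X.shape[1])
    best_w, best_correct = w.copy(), 0
    for _ in range(epochs):
        for x, t in zip(X, y):            # labels t in {-1, +1}
            if t * np.dot(w, x) <= 0:
                w += lr * t * x           # standard perceptron update
        correct = int(np.sum(y * (X @ w) > 0))
        if correct > best_correct:        # keep the best split seen so far
            best_w, best_correct = w.copy(), correct
    return best_w, best_correct
```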

The Efficiency and the Robustness of Natural Gradient Descent Learning Rule

The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer and multi-layer perceptrons. We have discovered a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron. Based on this scheme, we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the ...

Full text
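The core update here is the well-known natural-gradient step w ← w − η F⁻¹∇L, with F the Fisher information matrix; the paper's contribution concerns computing it efficiently for multi-layer perceptrons, which the naive sketch below does not attempt. The damping term is an added assumption for numerical stability:

```python
import numpy as np

def natural_gradient_step(w, grad, fisher, lr=0.1, damping=1e-4):
    """One natural-gradient step: precondition the loss gradient with the
    (damped) inverse Fisher information matrix."""
    F = fisher + damping * np.eye(len(w))
    return w - lr * np.linalg.solve(F, grad)  # solve F d = grad, don't invert F
```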

Finite convergence of a fuzzy δ rule for a fuzzy perceptron

A learning algorithm based on a fuzzy δ rule is proposed for a fuzzy perceptron with the same topological structure as a conventional linear perceptron. The inner operations involved in the working process of this fuzzy perceptron are based on max-min logical operations rather than the conventional multiplication and summation. The initial values of the network weights are fixed at 1. Each v...

Full text
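A loose sketch of the max-min forward pass, paired with a monotone, decrease-only update consistent with weights initialized at 1; the exact fuzzy δ rule in the paper may differ in how it selects and scales the updated components:

```python
import numpy as np

def maxmin_output(w, x):
    """Max-min logic replaces multiply-and-sum: min acts as AND, max as OR."""
    return np.max(np.minimum(w, x))

def fuzzy_delta_step(w, x, target, lr=0.5):
    """Illustrative update: weights start at 1 and are only ever lowered,
    on the components responsible for too large an output."""
    y = maxmin_output(w, x)
    if y > target:                          # output too high
        culprits = np.minimum(w, x) >= y    # components attaining the max
        w = np.where(culprits, np.maximum(w - lr * (y - target), 0.0), w)
    return w

w = np.ones(3)                              # initial weights fixed at 1
w = fuzzy_delta_step(w, np.array([0.9, 0.2, 0.4]), target=0.5)
```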



Publication date: 2007